Results 1 - 4 of 4
1.
2022 CHI Conference on Human Factors in Computing Systems, CHI 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1874721

ABSTRACT

Data visualisations are increasingly used online to engage readers and enable independent analysis of the data underlying news stories. However, access to such infographics is problematic for readers who are blind or have low vision (BLV). Equitable access to information is a basic human right and essential for independence and inclusion. We introduce infosonics, the audio equivalent of infographics: a new style of interactive sonification that uses a spoken introduction and annotations, non-speech audio, and sound design elements to present data in an understandable and engaging way. A controlled user evaluation with 18 BLV adults found that a COVID-19 infosonic enabled a clearer mental image than a traditional sonification. Further, infosonics proved complementary to text descriptions and facilitated independent understanding of the data. Based on our findings, we provide preliminary suggestions for infosonics design, which we hope will enable BLV people to gain equitable access to online news and information. © 2022 ACM.
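To make the underlying technique concrete, below is a minimal Python sketch of the parameter-mapping sonification on which infosonics build: a short data series rendered as a sequence of pitched tones. The pitch range, note duration, and toy case counts are illustrative assumptions; the paper's infosonics additionally layer spoken introductions, annotation, and sound design, which this sketch does not attempt.

# Minimal sketch of a parameter-mapping sonification: daily case counts
# mapped to pitch and written to a WAV file. Illustrative only; not the
# authors' implementation.
import numpy as np
import wave

SAMPLE_RATE = 44100

def value_to_freq(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Linearly map a data value onto a frequency range (Hz)."""
    t = (value - vmin) / (vmax - vmin) if vmax > vmin else 0.0
    return fmin + t * (fmax - fmin)

def sonify(series, note_dur=0.25):
    """Render one short sine tone per data point."""
    vmin, vmax = min(series), max(series)
    tones = []
    for v in series:
        f = value_to_freq(v, vmin, vmax)
        t = np.linspace(0, note_dur, int(SAMPLE_RATE * note_dur), endpoint=False)
        env = np.hanning(t.size)  # fade in/out to avoid clicks
        tones.append(env * np.sin(2 * np.pi * f * t))
    return np.concatenate(tones)

if __name__ == "__main__":
    daily_cases = [120, 340, 810, 1500, 1100, 600, 250]  # toy data
    pcm = (sonify(daily_cases) * 32767).astype(np.int16)
    with wave.open("cases.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(pcm.tobytes())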

2.
J Cancer Res Clin Oncol ; 148(9): 2497-2505, 2022 Sep.
Article in English | MEDLINE | ID: covidwho-1427250

ABSTRACT

PURPOSE: Non-melanoma skin cancer (NMSC) is the most frequent keratinocyte-origin skin tumor. Dermoscopy of NMSC has been confirmed to confer a diagnostic advantage over visual face-to-face assessment. Under COVID-19 restrictions, diagnosis from telemedicine photographs, which are analogous to visual inspection, displaced part of in-person visits. This study evaluated the performance of a dual convolutional neural network (CNN) on dermoscopic images (DI) versus smartphone-captured images (SI) and tested whether artificial intelligence narrows the reported gap in diagnostic accuracy. METHODS: Two networks were combined into a unified malignancy classifier: a CNN that receives the raw image and predicts malignancy, overlaid by a second, independent CNN that processes a sonification (image-to-sound mapping) of the original image. All images were histopathology-verified in a comparison between NMSC and benign skin lesions excised as suspected NMSCs. The study outcome criteria were the sensitivity and specificity of the unified output. RESULTS: Images acquired by DI (n = 132 NMSC, n = 33 benign) were compared to SI (n = 170 NMSC, n = 28 benign). DI and SI analyses yielded areas under the receiver operating characteristic curve (AUC) of 0.911 and 0.821, respectively. Accuracy was higher for DI (0.88; CI 0.819-0.924) than for SI (0.75; CI 0.681-0.806; p < 0.005). Sensitivity was higher for DI than for SI (95.3%, CI 90.4-98.3 vs. 75.3%, CI 68.1-81.6; p < 0.001), whereas specificity did not differ significantly. CONCLUSION: Telemedicine use of smartphone images may substantially decrease diagnostic performance compared with dermoscopy, which both healthcare providers and patients need to consider.
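As a rough illustration of the dual-network idea, the sketch below fuses the probabilities of two stand-in models: one scoring the raw image and one scoring an image-to-sound mapping of it. The sonify_image transform, the toy stand-in models, and the mean-fusion rule are all assumptions made for illustration, not the study's architecture or training procedure.

# Hedged sketch of a two-stream malignancy classifier. The real system uses
# two trained CNNs; here trivial stand-ins demonstrate only the fusion shape.
import numpy as np

def sonify_image(img):
    """Toy image-to-signal mapping: column means read as a 1-D waveform."""
    return img.astype(np.float32).mean(axis=0)

def raw_model(img):
    """Stand-in for CNN #1: probability of malignancy from raw pixels."""
    return float(1 / (1 + np.exp(-(img.mean() - 0.5))))

def audio_model(signal):
    """Stand-in for CNN #2: probability of malignancy from the sonification."""
    return float(1 / (1 + np.exp(-(signal.std() * 4 - 1))))

def unified_classifier(img, threshold=0.5):
    """Fuse the two streams by averaging (an assumed rule, not the paper's)."""
    p = 0.5 * raw_model(img) + 0.5 * audio_model(sonify_image(img))
    return p, p >= threshold

if __name__ == "__main__":
    lesion = np.random.default_rng(0).random((128, 128))  # placeholder image
    prob, malignant = unified_classifier(lesion)
    print(f"fused p(malignant) = {prob:.3f}, flag = {malignant}")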


Subject(s)
COVID-19 , Deep Learning , Skin Neoplasms , Algorithms , Artificial Intelligence , COVID-19/diagnostic imaging , Dermoscopy/methods , Humans , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Smartphone
3.
Front Psychol ; 12: 623110, 2021.
Article in English | MEDLINE | ID: covidwho-1207715

ABSTRACT

Since sound and music are powerful drivers of human behavior and physiology, we propose the use of sonification to activate healthy breathing patterns in participants and induce relaxation. Sonification is often used in the context of biofeedback, as it can serve as an informational, non-invasive, real-time stimulus to monitor, motivate, or modify human behavior. The first goal of this study is the proposal and evaluation of a distance-based biofeedback system that uses a tempo- and phase-aligned sonification strategy to adapt breathing patterns and induce states of relaxation. The second goal is the evaluation of several sonification stimuli with 18 participants recruited online, whose psychometric and behavioral data we analyzed using questionnaires and respiration rate and ratio, respectively. The sonification stimuli consisted of filtered noise mimicking a breathing sound, nature environmental sounds, and a musical phrase. Preliminary results indicated that the nature stimulus was the most pleasant and led to the most pronounced decrease in respiration rate, while the noise sonification had the most beneficial effect on respiration ratio. While further research is needed to generalize these findings, this study and its methodological underpinnings suggest the potential of the proposed biofeedback system to support ecologically valid experiments in participants' homes during the COVID-19 pandemic.
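A minimal sketch of the breathing-like noise stimulus follows: white noise amplitude-modulated at a respiration tempo that ramps from a measured rate toward a slower target. The linear tempo ramp and the raised-cosine envelope are illustrative assumptions; they stand in for, but do not reproduce, the study's tempo- and phase-alignment strategy.

# Hedged sketch of a breathing-paced noise guide: noise whose loudness
# envelope cycles once per breath, slowing from start_bpm to target_bpm.
import numpy as np

SAMPLE_RATE = 44100

def breathing_noise(start_bpm, target_bpm, seconds):
    """Amplitude-modulated noise at a gradually slowing breath tempo."""
    n = int(SAMPLE_RATE * seconds)
    # Linearly ramp the instantaneous breath rate (breaths/min -> Hz).
    rate_hz = np.linspace(start_bpm, target_bpm, n) / 60.0
    # Integrate instantaneous frequency to get phase.
    phase = 2 * np.pi * np.cumsum(rate_hz) / SAMPLE_RATE
    envelope = 0.5 * (1 - np.cos(phase))  # 0..1, one hump per breath
    noise = np.random.default_rng(0).uniform(-1, 1, n)
    return envelope * noise

# Example: guide a listener from 16 breaths/min down to 8 over 30 seconds.
audio = breathing_noise(start_bpm=16, target_bpm=8, seconds=30)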

4.
BMC Bioinformatics ; 21(1): 431, 2020 Oct 02.
Article in English | MEDLINE | ID: covidwho-810440

ABSTRACT

BACKGROUND: This paper describes a web-based tool that uses a combination of sonification and an animated display to inquire into the SARS-CoV-2 genome. The audio is generated in real time from a variety of RNA motifs that are known to be important in the functioning of the RNA. Additionally, metadata relating to RNA translation and transcription have been used to shape the auditory and visual displays. Together, these tools provide a unique approach to further understanding the metabolism of the viral RNA genome. The audio offers a means of representing the function of the RNA that complements traditional written and visual approaches. RESULTS: Sonification of the SARS-CoV-2 genomic RNA sequence yields a complex auditory stream composed of up to 12 individual audio tracks. Each auditory motif is derived from the actual RNA sequence or from metadata. This approach has been used to represent transcription or translation of the viral RNA genome, and the display highlights the real-time interaction of functional RNA elements. The sonification of codons derived from all three reading frames of the viral RNA sequence, in combination with sonified metadata, provides the framework for this display. Functional RNA motifs such as transcription-regulatory sequences and stem-loop regions have also been sonified. Using the tool, audio can be generated in real time from either genomic or sub-genomic representations of the RNA. Given the large size of the viral genome, a collection of interactive buttons is provided to navigate to regions of interest, such as cleavage regions in the polyprotein, untranslated regions, or each gene. The tool is available through an internet browser, and the user can interact with the data display in real time. CONCLUSION: The auditory display, in combination with real-time animation of the processes of translation and transcription, provides a unique insight into the large body of evidence describing the metabolism of the RNA genome. Furthermore, the tool has been used as an algorithmic audio generator, and the resulting audio tracks can be listened to by the general community, without reference to the visual display, to encourage further inquiry into the science.
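The sketch below illustrates one codon track of such a display: codons are read from a chosen reading frame and each is mapped to a pitch, with one track per frame. The pentatonic codon-to-pitch table, the base-index mapping, and the note timing are illustrative assumptions, not the tool's actual mappings or its metadata layers.

# Hedged sketch of a codon-track sonification over three reading frames.
import numpy as np

PENTATONIC = [220.0, 247.5, 277.2, 330.0, 370.0]  # approx. A3 pentatonic, Hz

def codons(rna, frame=0):
    """Yield successive codons from the given reading frame (0, 1, or 2)."""
    for i in range(frame, len(rna) - 2, 3):
        yield rna[i:i + 3]

def codon_to_freq(codon):
    """Deterministically map a codon onto the scale via its base indices."""
    idx = sum("ACGU".index(b) for b in codon)
    return PENTATONIC[idx % len(PENTATONIC)]

def sonify_frame(rna, frame=0, note_dur=0.12, rate=44100):
    """Render one short enveloped sine tone per codon in the frame."""
    t = np.linspace(0, note_dur, int(rate * note_dur), endpoint=False)
    notes = [np.sin(2 * np.pi * codon_to_freq(c) * t) * np.hanning(t.size)
             for c in codons(rna, frame)]
    return np.concatenate(notes) if notes else np.zeros(0)

# Example: sonify a toy sequence in all three reading frames, analogous to
# the tool's three codon tracks (real input would be the viral genome).
seq = "AUGGUUCAUCGAUAA"
tracks = [sonify_frame(seq, f) for f in range(3)]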


Subject(s)
Betacoronavirus/genetics , Genome, Viral , Software , Betacoronavirus/isolation & purification , COVID-19 , Coronavirus Infections/pathology , Coronavirus Infections/virology , Genomics , Humans , Open Reading Frames/genetics , Pandemics , Pneumonia, Viral/pathology , Pneumonia, Viral/virology , RNA, Viral/chemistry , RNA, Viral/genetics , RNA, Viral/metabolism , SARS-CoV-2